Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift. Specifically, UDA methods try to align the source and target representations to improve generalization on the target domain. Further, UDA methods work under the assumption that the source data is accessible during the adaptation process. However, in real-world scenarios, the labelled source data is often restricted due to privacy regulations, data transmission constraints, or proprietary data concerns. The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data. In this paper, we explore the SFDA setting for the task of adaptive object detection. To this end, we propose a novel training strategy for adapting a source-trained object detector to the target domain without source data. More precisely, we design a novel contrastive loss that enhances the target representations by exploiting the object relations within a given target-domain input. These object instance relations are modelled using an Instance Relation Graph (IRG) network, which is then used to guide the contrastive representation learning. In addition, we utilize a student-teacher based knowledge distillation strategy to avoid overfitting to the noisy pseudo-labels generated by the source-trained model. Extensive experiments on multiple object detection benchmark datasets show that the proposed approach is able to efficiently adapt source-trained object detectors to the target domain, outperforming previous state-of-the-art domain adaptive detection methods. Code is available at https://github.com/Vibashan/irg-sfda.
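The abstract above describes a contrastive loss guided by object relations from the IRG network. The following is a minimal NumPy sketch of such a graph-guided instance contrastive loss (InfoNCE-style), where graph edges mark positive instance pairs; the function name, temperature value, and the use of a binary adjacency matrix are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def irg_contrastive_loss(features, adjacency, temperature=0.1):
    """Graph-guided instance contrastive loss (sketch).

    features:  (N, D) array of object-instance embeddings.
    adjacency: (N, N) binary relation graph; 1 marks a positive pair.
    Pairs connected in the graph are pulled together, the rest pushed
    apart, following the usual InfoNCE construction.
    """
    # L2-normalise the instance embeddings row-wise.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature      # pairwise similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-pairs
    # Row-wise softmax over the remaining pairs.
    exp = np.exp(sim - sim.max(axis=1, keepdims=True))
    prob = exp / exp.sum(axis=1, keepdims=True)
    # Positive pairs are the edges of the relation graph.
    pos = adjacency.astype(bool)
    np.fill_diagonal(pos, False)
    losses = -np.log(prob[pos] + 1e-12)
    return losses.mean()
```

With this construction, marking genuinely similar instances as graph neighbours yields a lower loss than marking dissimilar ones, which is what drives the representations toward the graph structure.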
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information. In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion. Specifically, CNN-based methods perform image fusion by fusing local features. However, they do not consider the long-range dependencies that are present in the image. Transformer-based models are designed to overcome this by modeling long-range dependencies with the help of a self-attention mechanism. This motivates us to propose a novel Image Fusion Transformer (IFT), in which we develop a transformer-based multi-scale fusion strategy that attends to both local and long-range information (or global context). The proposed method follows a two-stage training approach. In the first stage, we train an auto-encoder to extract deep features at multiple scales. In the second stage, multi-scale features are fused using a Spatio-Transformer (ST) fusion strategy. Each ST fusion block consists of a CNN branch and a transformer branch, which capture local and long-range features, respectively. Extensive experiments on multiple benchmark datasets show that the proposed method performs better than many competitive fusion algorithms. Furthermore, we show the effectiveness of the proposed ST fusion strategy with an ablation analysis. The source code is available at: https://github.com/Vibashan/Image-Fusion-Transformer.
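The two-branch ST fusion block described above can be sketched as follows. This is a parameter-free NumPy toy, assuming a 3x3 mean filter as a stand-in for the learned CNN branch and single-head, weight-free self-attention for the transformer branch; the real IFT blocks use trained layers.

```python
import numpy as np

def st_fusion(feat_a, feat_b):
    """Sketch of one Spatio-Transformer fusion step.

    feat_a, feat_b: (H, W, C) feature maps from the two source images.
    A local (conv-like) branch and a global (attention) branch are
    applied to the pre-fused map and their outputs are summed.
    """
    fused = (feat_a + feat_b) / 2.0      # elementwise pre-fusion
    return local_branch(fused) + attention_branch(fused)

def local_branch(x):
    """3x3 mean filter as a stand-in for a learned conv layer."""
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / 9.0

def attention_branch(x):
    """Weight-free self-attention over flattened spatial tokens."""
    h, w, c = x.shape
    tokens = x.reshape(h * w, c)
    scores = tokens @ tokens.T / np.sqrt(c)
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return (attn @ tokens).reshape(h, w, c)
```

The local branch only ever mixes a 3x3 neighbourhood, while the attention branch lets every spatial position attend to every other one, which is exactly the local-versus-global split the abstract motivates.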
The Kepler and TESS missions have generated over 100,000 potential transit signals that must be processed in order to create catalogues of planet candidates. In recent years, there has been growing interest in using machine learning to analyze these data in search of new exoplanets. Unlike existing machine learning works, ExoMiner, the deep learning classifier proposed in this work, mimics how domain experts examine diagnostic tests to vet transit signals. ExoMiner is a highly accurate, explainable, and robust classifier that 1) allows us to validate 301 new exoplanets from the MAST Kepler Archive, and 2) is general enough to be applied to missions such as the ongoing TESS mission. We perform an extensive experimental study to verify that ExoMiner is more reliable and accurate than existing transit signal classifiers in terms of different classification and ranking metrics. For example, for a fixed precision value of 99%, ExoMiner retrieves 93.6% of all exoplanets in the test set (i.e., recall = 0.936), while the best existing classifier achieves 76.3%. Furthermore, ExoMiner's modular design facilitates its explainability. We introduce a simple explainability framework that provides experts with feedback on why ExoMiner classifies a transit signal into a specific class label (e.g., planet candidate or not a planet candidate).
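The "recall at a fixed precision of 99%" figure quoted above is a standard ranking metric. As a generic illustration (not ExoMiner's evaluation code), it can be computed by sweeping a score threshold and keeping the best recall among operating points that meet the precision target:

```python
import numpy as np

def recall_at_precision(scores, labels, target_precision=0.99):
    """Recall at a fixed precision (generic metric sketch).

    scores: classifier scores, higher = more likely a planet.
    labels: 1 for a true exoplanet, 0 otherwise.
    Sorts by score, evaluates every threshold, and returns the best
    recall among thresholds whose precision meets the target.
    """
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)          # true positives at each cutoff
    fp = np.cumsum(1 - labels)      # false positives at each cutoff
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    ok = precision >= target_precision
    return float(recall[ok].max()) if ok.any() else 0.0
```

At a 99% precision target the classifier may accept at most one false positive per hundred detections, so a recall of 0.936 at that operating point is a strong result.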
Faces generated using Generative Adversarial Networks (GANs) have reached unprecedented realism. These faces, also known as "DeepFakes", look like realistic photographs with very little pixel-level distortion. While some works have enabled training models that lead to generation of specific properties of the subject, generating face images based on natural language descriptions has not yet been fully explored. For security and criminal identification, providing a GAN-based system that works like a sketch artist would be useful. In this paper, we propose a novel approach to generate face images from semantic text descriptions. The learned model is provided with a text description and an outline of the face type, which the model uses to sketch the features. Our model is trained using an Affine Combination Module (ACM) mechanism to combine the BERT text embedding and the GAN latent space using a self-attention matrix. This avoids the loss of features caused by inadequate "attention", which can occur if the text embedding and latent vector are simply concatenated. Our approach is able to generate images that match detailed textual descriptions of faces very accurately, with many fine facial details, and helps generate better images. The proposed method is also capable of making incremental changes to a previously generated image if additional textual descriptions or sentences are provided.
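The core idea above, modulating the GAN latent with the text embedding instead of concatenating them, can be sketched in a few lines. This is a heavily simplified toy, assuming hypothetical projection matrices `w_gamma`/`w_beta` and a weight-free self-attention pooling over text tokens; it is not the paper's actual ACM implementation.

```python
import numpy as np

def self_attend_pool(tokens):
    """Weight-free self-attention over text tokens, mean-pooled to a
    single embedding vector. tokens: (T, D) array."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return (attn @ tokens).mean(axis=0)

def affine_combine(latent, text_emb, w_gamma, w_beta):
    """Affine combination sketch: the text embedding produces a
    per-dimension scale (gamma) and shift (beta) that modulate the
    GAN latent, instead of being concatenated onto it."""
    gamma = np.tanh(text_emb @ w_gamma)   # scale derived from text
    beta = text_emb @ w_beta              # shift derived from text
    return (1.0 + gamma) * latent + beta
```

One consequence of this design is graceful degradation: an empty (all-zero) text input yields gamma = beta = 0 and leaves the latent untouched, whereas concatenation would always perturb the generator input.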